Faraz Zaidi, LaBRI, INRIA Bordeaux - Sud Ouest, faraz.zaidi@labri.fr [PRIMARY contact]
Paolo Simonetto, LaBRI, INRIA Bordeaux - Sud Ouest, paolo.simonetto@labri.fr
Daniel Archambault, INRIA Bordeaux - Sud Ouest, daniel.archambault@inria.fr
Pierre-Yves Koenig, LaBRI, INRIA Bordeaux - Sud Ouest, Pierre-Yves.Koenig@labri.fr
Frédéric Gilbert, LaBRI, INRIA Bordeaux - Sud Ouest, frederic.gilbert@labri.fr
Trung-Tien Phan-Quang, LaBRI, INRIA Bordeaux - Sud Ouest, phanquan@labri.fr
Ronan Sicre, LaBRI, sicre@labri.fr
Mathieu Brulin, LaBRI, mathieu.brulin@labri.fr
Remy Vieux, LaBRI, vieux@labri.fr
Morgan Mathiaut, LaBRI, mathiaut@labri.fr
Antoine Lambert, LaBRI, antoine.lambert@labri.fr
Guy Melançon, LaBRI, INRIA Bordeaux - Sud Ouest [Faculty advisor]
The Tulip framework allows for the visualization, drawing, and editing of graphs. Every part of the framework is built to handle graphs of more than 1,000,000 elements. The system supports navigation, geometric operations, extraction of subgraphs, metric computations, graph-theoretic operations, and filtering.
The Tulip architecture provides the following features:
· 3D visualizations
· 3D modifications
· Plug-in support for easy extension
· Building of clusters and navigation within them
· Automatic drawing of graphs
· Automatic clustering of graphs
· Automatic selection of elements
· Automatic metric-based coloration of graphs
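As a rough illustration of this workflow, the sketch below builds a small graph and applies a metric-based coloration using the Tulip Python bindings (tulip-python). Tulip itself is a C++ framework; the module and call names here follow the public Python bindings and are an assumption, not the exact code used for this entry.

```python
# Minimal sketch with the Tulip Python bindings; an assumed stand-in for
# the C++ framework described above, not the code used in this entry.
from tulip import tlp

graph = tlp.newGraph()

# Build a small example graph: a path of four nodes.
nodes = [graph.addNode() for _ in range(4)]
for a, b in zip(nodes, nodes[1:]):
    graph.addEdge(a, b)

# "Metric coloration": colour each node by its degree, so that
# higher-degree nodes are drawn in a stronger red.
view_color = graph.getColorProperty("viewColor")
max_deg = max(graph.deg(n) for n in graph.getNodes())
for n in graph.getNodes():
    intensity = int(255 * graph.deg(n) / max_deg)
    view_color.setNodeValue(n, tlp.Color(intensity, 0, 0))
```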
Video:
ANSWERS:
MC3.1: Provide a tab-delimited table containing the location, start time and duration of the events identified above.
MC3.2: Identify any events of potential counterintelligence/espionage interest in the video. Provide a Detailed Answer, including a description of any activities, and why the event is of interest.
The first step in identifying suspicious activities in the videos was to apply motion detection algorithms to track moving objects. Figure 1 presents some example output. The algorithm assigns an identification number to each moving object and traces it through all frames while the camera is centred on a given location. Object detection algorithms compute the dimensions of each object and its centre of gravity. The centre of gravity is an (x, y) coordinate giving the position of the object in the frame, and the dimensions are the height and width around the centre of gravity, in pixels. We ran our system on eight of the ten hours of video.
Figure 1: Motion detection algorithm. Purple boxes highlight detected motions.
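The text does not specify the detector used, so the sketch below stands in OpenCV background subtraction for the motion detection step, recording a bounding box and centre of gravity per detection. The file name and thresholds are illustrative, and the association of identification numbers across frames is omitted.

```python
# Hedged sketch of per-frame detection: OpenCV background subtraction as a
# stand-in for the unspecified motion detector (OpenCV 4 API assumed).
import cv2

cap = cv2.VideoCapture("video2.avi")          # hypothetical file name
subtractor = cv2.createBackgroundSubtractorMOG2()

frame_idx = 0
detections = []                               # (frame, x, y, w, h) per blob
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Keep only confident foreground pixels (drop MOG2 shadow values),
    # then find connected blobs.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Centre of gravity (x, y) plus width/height around it, in pixels.
        detections.append((frame_idx, x + w // 2, y + h // 2, w, h))
    frame_idx += 1
cap.release()
```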
From this information, we can create a node-link diagram representing object trajectories in the video. The position of a tracked object in a particular frame is a node, and edges link this node to its positions in the previous and next frames. Thus, we have a graph for the path of each object over the course of the video. The system creates one graph for each hour of video, summarizing the motions during that period. An example of this graph is shown in Figure 2.
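A minimal sketch of this construction follows, assuming each track is a list of (frame, x, y) positions produced by the tracker above; the builder and property names follow the Tulip Python bindings and are illustrative.

```python
# Sketch of the trajectory graph: one node per tracked position, one edge
# between consecutive frames of the same object.
from tulip import tlp

def build_trajectory_graph(tracks):
    graph = tlp.newGraph()
    layout = graph.getLayoutProperty("viewLayout")
    for track in tracks:
        previous = None
        for frame, x, y in track:
            node = graph.addNode()
            # Place the node at the object's on-screen position.
            layout.setNodeValue(node, tlp.Coord(x, y, 0))
            if previous is not None:
                graph.addEdge(previous, node)
            previous = node
    return graph
```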
We simplify this graph by applying several filters to remove parts that are not of interest. The first filter is based on object size: if an object is smaller than 15x15 pixels, we remove it, considering it either noise or a distant object too small to be identified. Similarly, objects within 10 pixels of the top, bottom, left, or right border of the screen are removed, because they are difficult to resolve. We also remove common straight-line trajectories by grouping essentially straight paths into sets of vectors; if fifteen or more such paths occur within the hour, all of them are removed. This filters out cars and pedestrians on the sidewalk. Finally, we remove all motion in the sky and on building facades. Trajectories that wait, that is, objects that move only a few pixels over a long period of time, are flagged as waiting and are never filtered automatically. Waiting trajectories are coloured yellow. These trajectories are important because they can indicate that a meeting has occurred. Figure 2 shows an example of this filtered graph for the final hour of video 2.
Figure 2: Illustration of graphs used to model motion trajectories. Trajectories that wait are coloured yellow and all other trajectories are coloured green.
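The sketch below is an illustrative rendering of these filters. The 15x15-pixel size, 10-pixel border, and fifteen-path thresholds come from the text; the frame size, straightness test, and direction bins are assumptions, and sky/facade masking, which would need per-scene regions, is omitted.

```python
# Hedged sketch of the trajectory filters; each track is a list of
# (frame, x, y) tuples, and sizes is a list of (w, h) per detection.
import math
from collections import defaultdict

FRAME_W, FRAME_H = 720, 480        # assumed frame dimensions

def path_length(track):
    return sum(math.dist(a[1:3], b[1:3]) for a, b in zip(track, track[1:]))

def keep_track(track, sizes):
    # Filter 1: objects smaller than 15x15 pixels are noise or too distant.
    if all(w < 15 and h < 15 for w, h in sizes):
        return False
    # Filter 2: objects within 10 pixels of any screen border.
    if any(x < 10 or y < 10 or x > FRAME_W - 10 or y > FRAME_H - 10
           for _, x, y in track):
        return False
    return True

def is_waiting(track, max_drift=5):
    # A track that moves only a few pixels over its lifetime is flagged
    # as "waiting" and is never removed automatically.
    return path_length(track) < max_drift

def filter_straight_groups(tracks, min_group=15):
    # Group near-straight paths by the direction of their net displacement;
    # groups of fifteen or more (cars, sidewalk pedestrians) are removed.
    groups = defaultdict(list)
    for t in tracks:
        (f0, x0, y0), (f1, x1, y1) = t[0], t[-1]
        net = math.dist((x0, y0), (x1, y1))
        if net > 0 and net / max(path_length(t), 1e-6) > 0.95:
            direction = round(math.atan2(y1 - y0, x1 - x0), 1)
            groups[direction].append(t)
    removed = {id(t) for g in groups.values() if len(g) >= min_group
               for t in g}
    return [t for t in tracks if id(t) not in removed or is_waiting(t)]
```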
Using the system, we identified a meeting that is potentially of interest. The meeting occurs between a person in black and a second person in white, and at its end two briefcases are exchanged. The meeting takes place at 3:24 in the second video, or 8:24 in absolute video time. Figure 3 shows the graph representation of this meeting in its filtered hour of video, and the meeting itself is depicted in Figure 1 above. This meeting is interesting because it shows a clear exchange of documents between two parties.
Figure 3: Meeting that takes place at 8:24 in the video. The yellow waiting trajectories are the two people who meet.